Bayes Theorem

Bayes theorem is a result in probability theory that relates conditional probabilities and can be used to update beliefs (expressed as probabilities) in the light of new evidence. It underpins a number of approaches in AI, including Bayesian networks. To be precise, Bayes theorem states:
      P(X \mid E) = \frac{P(E \mid X)\, P(X)}{\sum_i P(E \mid X_i)\, P(X_i)}
where X is some unknown state (such as "it is raining"), the sum is taken over all possible states Xi (e.g. "it is dry", "it is snowing"), and E is some other observable (often interpreted as evidence, such as "there's a wet umbrella in the hall-stand"). It effectively allows one to turn conditional probabilities around.

When Bayes theorem is used to update beliefs in the light of evidence, the probabilities P(Xi) are known as the prior distribution, whereas the probabilities P(Xi | E) are known as the posterior distribution.
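
As a concrete illustration, here is a minimal Python sketch of the wet-umbrella example above. The three states and all the probability values (the priors P(Xi) and the likelihoods P(E | Xi)) are illustrative assumptions, not figures from the text.

# Prior distribution P(Xi) over the possible states (assumed values).
prior = {"raining": 0.2, "dry": 0.7, "snowing": 0.1}

# Likelihoods P(E | Xi): probability of finding a wet umbrella in the
# hall-stand given each state (assumed values).
likelihood = {"raining": 0.9, "dry": 0.05, "snowing": 0.5}

def posterior(prior, likelihood):
    """Return the posterior distribution P(Xi | E) via Bayes theorem."""
    # Denominator: the sum over all states of P(E | Xi) P(Xi).
    evidence = sum(likelihood[x] * prior[x] for x in prior)
    return {x: likelihood[x] * prior[x] / evidence for x in prior}

print(posterior(prior, likelihood))
# {'raining': 0.679..., 'dry': 0.132..., 'snowing': 0.188...}

With these assumed numbers, seeing the wet umbrella raises the probability of "it is raining" from the prior 0.2 to a posterior of roughly 0.68, which is exactly the belief update the theorem describes.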

Used in Chap. 3: pages 28, 29, 33, 34; Chap. 12: pages 171, 176, 181; Chap. 18: page 276; Chap. 19: page 309

Also known as Bayesian reasoning